620 research outputs found

    Neutron-powered precursors of kilonovae

    The merger of binary neutron stars (NSs) ejects a small quantity of neutron-rich matter, the radioactive decay of which powers a day- to week-long thermal transient known as a kilonova. Most of the ejecta remains sufficiently dense during its expansion that all neutrons are captured into nuclei during the r-process. However, recent general relativistic merger simulations by Bauswein and collaborators show that a small fraction of the ejected mass (a few per cent, or ~1e-4 Msun) expands sufficiently rapidly for most neutrons to avoid capture. This matter originates from the shock-heated interface between the merging NSs. Here we show that the beta-decay of these free neutrons in the outermost ejecta powers a `precursor' to the main kilonova emission, which peaks on a timescale of a few hours following merger at U-band magnitude ~22 (for an assumed distance of 200 Mpc). The high luminosity and blue colors of the neutron precursor render it a potentially important counterpart to the gravitational wave source, which may encode valuable information on the properties of the merging binary (e.g. NS-NS versus NS-black hole) and the NS equation of state. Future work is necessary to assess the robustness of the fast-moving ejecta and the survival of free neutrons in the face of neutrino absorptions, although the precursor properties are robust to a moderate amount of leptonization. Our results provide additional motivation for short-latency gravitational wave triggers and rapid follow-up searches with sensitive ground-based telescopes. Comment: 6 pages, 5 figures, accepted to MNRAS main journal
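    The heating behind such a precursor can be sketched with a few lines of code: free neutrons beta-decay on the neutron mean lifetime, so the specific heating rate falls off exponentially. The constants below (neutron lifetime ~880 s, a pure-neutron specific heating rate of order 3.2e14 erg/g/s, ejecta mass 1e-4 Msun, neutron mass fraction 0.9) are illustrative assumptions, not values taken from the paper.

    ```python
    import math

    TAU_N = 880.0    # neutron mean lifetime [s] (assumed round value)
    EPS0 = 3.2e14    # specific heating rate of pure free neutrons [erg/g/s] (illustrative)
    M_SUN = 1.989e33 # solar mass [g]

    def neutron_heating_luminosity(t, m_ej=1e-4, x_n=0.9):
        """Instantaneous beta-decay heating [erg/s] at time t [s] after merger,
        for ejecta of mass m_ej [Msun] with initial free-neutron mass fraction x_n.
        The heating simply tracks the surviving neutron abundance, exp(-t/tau)."""
        return x_n * m_ej * M_SUN * EPS0 * math.exp(-t / TAU_N)

    # heating one hour after merger, when the precursor is near peak
    l_1h = neutron_heating_luminosity(3600.0)
    ```

    This ignores radiative transfer entirely; the observed precursor peaks hours after merger because the heat must first diffuse out of the fast outer ejecta.
    
    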

    Self-Learning Cloud Controllers: Fuzzy Q-Learning for Knowledge Evolution

    Cloud controllers aim at responding to application demands by automatically scaling the compute resources at runtime to meet performance guarantees and minimize resource costs. Existing cloud controllers often resort to scaling strategies that are codified as a set of adaptation rules. However, for a cloud provider, applications running on top of the cloud infrastructure are more or less black boxes, making it difficult at design time to define optimal or pre-emptive adaptation rules. Thus, the burden of taking adaptation decisions is often delegated to the cloud application. Yet, in most cases, application developers in turn have limited knowledge of the cloud infrastructure. In this paper, we propose learning adaptation rules during runtime. To this end, we introduce FQL4KE, a self-learning fuzzy cloud controller. In particular, FQL4KE learns and modifies fuzzy rules at runtime. The benefit is that for designing cloud controllers, we do not have to rely solely on precise design-time knowledge, which may be difficult to acquire. FQL4KE empowers users to specify cloud controllers by simply adjusting weights representing priorities in system goals instead of specifying complex adaptation rules. The applicability of FQL4KE has been experimentally assessed as part of the cloud application framework ElasticBench. The experimental results indicate that FQL4KE outperforms our previously developed fuzzy controller without learning mechanisms and the native Azure auto-scaling.
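    The core mechanism behind a controller like this can be sketched as fuzzy Q-learning: a crisp input (e.g. workload) fires several overlapping fuzzy sets, each fired rule picks a local action, and the temporal-difference error is distributed over the rules weighted by their firing strength. This is a minimal Glorennec-style sketch under assumed set shapes and action names, not FQL4KE's actual implementation.

    ```python
    import random

    def triangular(x, left, center, right):
        """Triangular membership: degree to which crisp input x belongs to a fuzzy set."""
        if x <= left or x >= right:
            return 0.0
        return (x - left) / (center - left) if x <= center else (right - x) / (right - center)

    class FuzzyQ:
        """Minimal fuzzy Q-learning sketch: one fuzzified input, a small discrete
        action set (e.g. scale-out / hold / scale-in). Illustrative only."""
        def __init__(self, sets, actions, alpha=0.1, gamma=0.9, eps=0.1):
            self.sets, self.actions = sets, actions
            self.alpha, self.gamma, self.eps = alpha, gamma, eps
            self.q = {name: {a: 0.0 for a in actions} for name in sets}

        def firing(self, x):
            """Firing strength of every rule for crisp input x."""
            return {name: triangular(x, *params) for name, params in self.sets.items()}

        def act(self, x):
            """Each fired rule picks a local action eps-greedily."""
            mu, choice = self.firing(x), {}
            for name, m in mu.items():
                if m > 0:
                    greedy = max(self.q[name], key=self.q[name].get)
                    choice[name] = random.choice(self.actions) if random.random() < self.eps else greedy
            return mu, choice

        def value(self, mu, choice):
            """Global action value: firing-strength-weighted sum of local q-values."""
            return sum(m * self.q[n][choice[n]] for n, m in mu.items() if n in choice)

        def update(self, mu, choice, reward, next_x):
            """TD update distributed over the fired rules, weighted by firing strength."""
            next_mu = self.firing(next_x)
            v_next = sum(m * max(self.q[n].values()) for n, m in next_mu.items())
            delta = reward + self.gamma * v_next - self.value(mu, choice)
            for n, m in mu.items():
                if n in choice:
                    self.q[n][choice[n]] += self.alpha * m * delta
    ```

    The "knowledge evolution" in FQL4KE corresponds to these q-values (and hence the fuzzy rule consequents) changing at runtime instead of being fixed at design time.
    
    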

    Variance of ML-based software fault predictors: are we really improving fault prediction?

    Software quality assurance activities become increasingly difficult as software systems become more and more complex and continuously grow in size. Moreover, testing becomes even more expensive when dealing with large-scale systems. Thus, to effectively allocate quality assurance resources, researchers have proposed fault prediction (FP), which utilizes machine learning (ML) to predict fault-prone code areas. However, ML algorithms typically make use of stochastic elements to increase the prediction models' generalizability and the efficiency of the training process. These stochastic elements, also known as nondeterminism-introducing (NI) factors, lead to variance in the training process and, as a result, to variance in prediction accuracy and training time. This variance poses a challenge for reproducibility in research. More importantly, while fault prediction models may have shown good performance in the lab (e.g., oftentimes involving multiple runs and averaging outcomes), high variance of results can pose the risk that these models show low performance when applied in practice. In this work, we experimentally analyze the variance of a state-of-the-art fault prediction approach. Our experimental results indicate that NI factors can indeed cause considerable variance in the fault prediction models' accuracy. We observed a maximum variance of 10.10% in terms of the per-class accuracy metric. We thus also discuss how to deal with such variance.
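    The kind of analysis described can be sketched in a few lines: train the same model repeatedly, varying only NI factors (random seed, data shuffling), and report the spread of a metric such as per-class accuracy. The run values below are illustrative numbers, not the paper's data.

    ```python
    import statistics

    def per_class_accuracy(y_true, y_pred, cls):
        """Accuracy restricted to instances of a single class (per-class accuracy)."""
        pairs = [(t, p) for t, p in zip(y_true, y_pred) if t == cls]
        return sum(t == p for t, p in pairs) / len(pairs)

    def run_spread(metric_values):
        """Summarize variance across repeated runs differing only in NI factors:
        worst-case spread (max - min) and sample standard deviation."""
        return max(metric_values) - min(metric_values), statistics.stdev(metric_values)

    # per-class accuracy of five hypothetical runs of the same predictor,
    # identical except for the random seed (illustrative values)
    accs = [0.72, 0.78, 0.70, 0.80, 0.75]
    spread, sd = run_spread(accs)
    ```

    Reporting the spread alongside the mean is what distinguishes this kind of study from the common practice of averaging runs and discarding the variance.
    
    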

    An AI Chatbot for Explaining Deep Reinforcement Learning Decisions of Service-oriented Systems

    Deep Reinforcement Learning (Deep RL) is increasingly used to cope with the open-world assumption in service-oriented systems. Deep RL was successfully applied to problems such as dynamic service composition, job scheduling, and offloading, as well as service adaptation. While Deep RL offers many benefits, understanding the decision-making of Deep RL is challenging because its learned decision-making policy essentially appears as a black box. Yet, understanding the decision-making of Deep RL is key to help service developers perform debugging, support service providers to comply with relevant legal frameworks, and facilitate service users to build trust. We introduce Chat4XAI to facilitate the understanding of the decision-making of Deep RL by providing natural-language explanations. Compared with visual explanations, the reported benefits of natural-language explanations include better understandability for non-technical users, increased user acceptance and trust, as well as more efficient explanations. Chat4XAI leverages modern AI chatbot technology and dedicated prompt engineering. Compared to earlier work on natural-language explanations using classical software-based dialogue systems, using an AI chatbot eliminates the need for eliciting and defining potential questions and answers up-front. We prototypically realize Chat4XAI using OpenAI's ChatGPT API and evaluate the fidelity and stability of its explanations using an adaptive service exemplar. Comment: To be published at 21st Int'l Conference on Service-Oriented Computing (ICSOC 2023), Rome, Italy, November 28-December 1, 2023, ser. LNCS, F. Monti, S. Rinderle-Ma, A. Ruiz Cortes, Z. Zheng, M. Mecella, Eds., Springer, 202
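    The prompt-engineering idea can be sketched as assembling the facts of one RL decision into a prompt for a chat model. The template, field names, and example state below are purely illustrative; Chat4XAI's actual prompts and its use of the OpenAI ChatGPT API are more elaborate, and the network call itself is omitted here.

    ```python
    def build_explanation_prompt(decision):
        """Turn one logged RL decision into a natural-language explanation request.
        Field names ('state', 'action', 'q_values') are assumptions for this sketch."""
        lines = [
            "You explain decisions of a Deep RL agent managing a service-oriented system.",
            f"Observed state: {decision['state']}",
            f"Chosen adaptation action: {decision['action']}",
            f"Estimated action values: {decision['q_values']}",
            "Explain in plain language, for a non-technical service user,",
            "why this action was preferred over the alternatives.",
        ]
        return "\n".join(lines)

    # a hypothetical logged decision of a service-adaptation agent
    example = {
        "state": {"avg_response_time_ms": 420, "load": "high"},
        "action": "add_service_instance",
        "q_values": {"add_service_instance": 0.91, "no_op": 0.42},
    }
    prompt = build_explanation_prompt(example)
    ```

    The prompt string would then be sent as a chat message to the model; because the chatbot handles arbitrary follow-up questions, no question/answer catalogue has to be defined up-front.
    
    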

    On contact between curves and rigid surfaces – from verification of the Euler-Eytelwein problem to knots

    A general theory for Curve-To-Curve contact is applied to develop a special contact algorithm between curves and rigid surfaces. In this case, contact kinematics are formulated in the local coordinate system attached to the curve; however, contact is defined at integration points of the curve line (mortar-type contact). The corresponding Closest Point Projection (CPP) procedure is then used to define the shortest distance between an integration point on the curve and the rigid surface. For some simple approximations of the rigid surface, closed-form solutions are possible. Within the finite element implementation, the isogeometric approach is used to model curvilinear cables, and the rigid surfaces can be defined in general via NURBS surfaces. Verification of the finite element algorithm is given using the well-known analytical solution of the Euler-Eytelwein problem – a rope on a cylindrical surface. The original 2D formula is generalized to the 3D case by considering an additional parameter, the pitch H of the helix. Finally, applications to knot mechanics are shown.
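    The verification benchmark mentioned has a compact closed form. The classical 2D Euler-Eytelwein (capstan) equation relates the two rope tensions across a cylinder through the friction coefficient and the wrap angle; a minimal worked example (the 3D pitch generalization from the abstract is not reproduced here):

    ```python
    import math

    def capstan_hold_force(load, mu, theta):
        """Euler-Eytelwein (capstan) equation: minimum holding force that keeps a
        rope from slipping under tension `load`, wrapped over a cylinder with
        friction coefficient mu and total wrap angle theta [radians]:
        T_hold = T_load / exp(mu * theta)."""
        return load / math.exp(mu * theta)

    # One full wrap (theta = 2*pi) with mu = 0.3: the cylinder amplifies the
    # holding force by a factor exp(0.6*pi), roughly 6.6.
    hold = capstan_hold_force(100.0, 0.3, 2.0 * math.pi)
    ```

    A finite element solution of the rope-on-cylinder problem should reproduce this tension ratio, which is what makes the problem a convenient verification case.
    
    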

    Towards the decentralized coordination of multiple self-adaptive systems

    When multiple self-adaptive systems share the same environment and have common goals, they may coordinate their adaptations at runtime to avoid conflicts and to satisfy their goals. There are two approaches to coordination. (1) Logically centralized, where a supervisor has complete control over the individual self-adaptive systems. Such an approach is infeasible when the systems have different owners or administrative domains. (2) Logically decentralized, where coordination is achieved through direct interactions. Because the individual systems have control over the information they share, decentralized coordination accommodates multiple administrative domains. However, existing techniques do not account simultaneously for both local concerns, e.g., preferences, and shared concerns, e.g., conflicts, which may lead to goals not being achieved as expected. Our idea to address this shortcoming is to express both types of concerns within the same constraint optimization problem. We propose CoADAPT, a decentralized coordination technique introducing two types of constraints: preference constraints, expressing local concerns, and consistency constraints, expressing shared concerns. At runtime, the problem is solved in a decentralized way using distributed constraint optimization algorithms implemented by each self-adaptive system. As a first step in realizing CoADAPT, we focus in this work on the coordination of adaptation planning strategies, traditionally addressed only with centralized techniques. We show the feasibility of CoADAPT in an exemplar from cloud computing and analyze experimentally its scalability.
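    The underlying optimization problem can be illustrated with a toy brute-force solver: each system has local preferences over its candidate plans (preference constraints), and a shared predicate rules out conflicting combinations (consistency constraints). Note this sketch solves the problem centrally for clarity; CoADAPT itself uses distributed constraint optimization algorithms, and the plan names and preference values are invented.

    ```python
    from itertools import product

    def coordinate(options_a, options_b, pref_a, pref_b, compatible):
        """Pick one plan per system maximizing the summed local preferences,
        subject to a shared consistency predicate over plan pairs."""
        best, best_val = None, float("-inf")
        for a, b in product(options_a, options_b):
            if not compatible(a, b):  # consistency constraint: skip conflicting pairs
                continue
            val = pref_a[a] + pref_b[b]  # preference constraints: local utilities
            if val > best_val:
                best, best_val = (a, b), val
        return best, best_val

    opts = ["scale_up", "migrate"]
    pref_a = {"scale_up": 2, "migrate": 5}
    pref_b = {"scale_up": 4, "migrate": 6}
    # shared concern (illustrative): both systems migrating at once conflicts
    no_double_migration = lambda a, b: not (a == "migrate" and b == "migrate")
    best, val = coordinate(opts, opts, pref_a, pref_b, no_double_migration)
    ```

    Note that the individually preferred pair (both migrating) is rejected by the consistency constraint, so the coordinated optimum differs from each system's local optimum.
    
    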

    A User Study on Explainable Online Reinforcement Learning for Adaptive Systems

    Online reinforcement learning (RL) is increasingly used for realizing adaptive systems in the presence of design time uncertainty. Online RL facilitates learning from actual operational data and thereby leverages feedback only available at runtime. However, Online RL requires the definition of an effective and correct reward function, which quantifies the feedback to the RL algorithm and thereby guides learning. With Deep RL gaining interest, the learned knowledge is no longer explicitly represented but encoded in a neural network. For a human, it becomes practically impossible to relate the parametrization of the neural network to concrete RL decisions. Deep RL thus essentially appears as a black box, which severely limits the debugging of adaptive systems. We previously introduced the explainable RL technique XRL-DINE, which provides visual insights into why certain decisions were made at important time points. Here, we introduce an empirical user study involving 54 software engineers from academia and industry to assess (1) the performance of software engineers when performing different tasks using XRL-DINE and (2) the perceived usefulness and ease of use of XRL-DINE. Comment: arXiv admin note: substantial text overlap with arXiv:2210.0593

    Revealing the radiative and non-radiative relaxation rates of the fluorescent dye Atto488 in a λ/2 Fabry-Pérot-resonator by spectral and time resolved measurements

    Using a Fabry-Pérot microresonator with controllable cavity lengths in the λ/2 regime, we show the controlled modification of the vibronic relaxation dynamics of a fluorescent dye molecule in the spectral and time domain. By altering the photonic mode density around the fluorophores, we are able to shape the fluorescence spectrum and specifically enhance the probability of the radiative transitions from the electronic excited state to distinct vibronic excited states of the electronic ground state. Analysis and correlation of the spectral and time resolved measurements by a theoretical model and a global fitting procedure allow us to reveal quantitatively the spectrally distributed radiative and non-radiative relaxation dynamics of the respective dye molecule under ambient conditions at the ensemble level.
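    The resonance condition behind the mode-density control is compact: a standing wave fits the cavity when the optical path equals an integer number of half wavelengths, n·L = m·λ/2, so λ_m = 2nL/m. A minimal sketch (the 260 nm cavity length is an illustrative choice, not a value from the paper; it merely places the fundamental mode near Atto488's green emission band):

    ```python
    def resonant_wavelengths(cavity_length_nm, n=1.0, max_order=4):
        """Fabry-Perot resonances of a planar cavity of length L and refractive
        index n: lambda_m = 2*n*L/m for mode order m = 1, 2, ...
        In the lambda/2 regime only the m = 1 mode overlaps the dye's emission
        band, so tuning L reshapes the fluorescence spectrum."""
        return [2.0 * n * cavity_length_nm / m for m in range(1, max_order + 1)]

    # an air-gap cavity of ~260 nm (illustrative) puts the fundamental
    # mode near 520 nm, within the emission band of a green dye like Atto488
    modes = resonant_wavelengths(260.0)
    ```

    Sweeping the cavity length therefore sweeps the single resonant wavelength across the dye's vibronic emission peaks, which is how specific transitions can be selectively enhanced.
    
    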

    Study of congenital Morgagnian cataracts in Holstein calves

    Cataracts are focal to diffuse opacities of the eye lens causing impaired vision or complete blindness. For bilateral congenital cataracts in Red Holsteins, a perfectly cosegregating mutation within the CPAMD8 gene (CPAMD8:g.5995966C>T) has been reported. We genotyped the CPAMD8:g.5995966C>T variant in Holstein calves affected by bilateral congenital cataracts, their unaffected relatives, and randomly selected herd mates. Ophthalmological examinations were performed in all affected individuals to confirm a congenital cataract. Whole genome sequencing was employed to screen variants in candidate genes for the Morgagnian cataract phenotype. In the present study, 3/35 cases were confirmed as homozygous for the mutation, as were 6/14 obligate carriers as heterozygous. A further 7/46 unaffected animals related to these cases were heterozygous for the CPAMD8:g.5995966C>T variant. However, 32 cases with a congenital cataract showed the wild type for the CPAMD8 variant. We did not identify variants in the candidate genes CPAMD8 and NID1, or in their close neighborhood, as strongly associated with the congenital cataract phenotype in Holstein calves with the CPAMD8 wild type. In conclusion, the CPAMD8:g.5995966C>T variant is insufficient to explain the majority of Morgagnian congenital cataract phenotypes in Holsteins. It is very likely that congenital bilateral cataracts are genetically heterogeneous and that not yet known variants in genes other than CPAMD8 and NID1 are involved.

    Feature-Model-Guided Online Learning for Self-Adaptive Systems

    A self-adaptive system can modify its own structure and behavior at runtime based on its perception of the environment, of itself and of its requirements. To develop a self-adaptive system, software developers codify knowledge about the system and its environment, as well as how adaptation actions impact the system. However, the codified knowledge may be insufficient due to design time uncertainty, and thus a self-adaptive system may execute adaptation actions that do not have the desired effect. Online learning is an emerging approach to address design time uncertainty by employing machine learning at runtime. Online learning accumulates knowledge at runtime by, for instance, exploring not-yet executed adaptation actions. We address two specific problems with respect to online learning for self-adaptive systems. First, the number of possible adaptation actions can be very large. Existing online learning techniques randomly explore the possible adaptation actions, but this can lead to slow convergence of the learning process. Second, the possible adaptation actions can change as a result of system evolution. Existing online learning techniques are unaware of these changes and thus do not explore new adaptation actions, but explore adaptation actions that are no longer valid. We propose using feature models to give structure to the set of adaptation actions and thereby guide the exploration process during online learning. Experimental results involving four real-world systems suggest that considering the hierarchical structure of feature models may speed up convergence by 7.2% on average. Considering the differences between feature models before and after an evolution step may speed up convergence by 64.6% on average. [...]
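    The role of the feature model can be illustrated with a toy enumerator: adaptation actions are the valid configurations formed by any subset of optional features plus exactly one alternative per XOR group. The feature names below are invented; the point is that the tree structure lets a learner explore per feature group instead of sampling the flat action space uniformly at random.

    ```python
    from itertools import product

    def valid_configs(optional, xor_groups):
        """Enumerate adaptation actions from a toy feature model: every subset of
        the optional features, combined with exactly one alternative from each
        XOR group (mandatory features are implicit and omitted)."""
        configs = []
        for bits in product([False, True], repeat=len(optional)):
            chosen = tuple(f for f, b in zip(optional, bits) if b)
            for alts in product(*xor_groups):
                configs.append(chosen + alts)
        return configs

    # hypothetical web-service feature model: an optional cache feature and an
    # XOR group choosing the content quality
    actions = valid_configs(["cache"], [["low_quality", "high_quality"]])
    ```

    After a system evolution step, diffing the configuration sets of the old and new feature models directly yields the adaptation actions that became newly available or invalid, which is what the evolution-aware exploration exploits.
    
    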